1.
Heliyon ; 10(5): e26416, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38468957

ABSTRACT

The emergence of federated learning (FL) in fog-enabled healthcare systems has enhanced privacy by safeguarding sensitive patient information across heterogeneous computing platforms. In this paper, we introduce the FedHealthFog framework, developed to overcome the difficulties of distributed learning in resource-constrained, IoT-enabled healthcare systems, particularly those sensitive to delay and energy efficiency. Conventional federated learning approaches face challenges stemming from substantial compute requirements and significant communication costs, primarily because they rely on a single server for global aggregation, which results in inefficient training. We address these problems by elevating strategically placed fog nodes to the role of local aggregators within the federated learning architecture, and a greedy heuristic is used to choose the fog node that acts as the global aggregator in each communication round between edge devices and the cloud. FedHealthFog reduces communication latency by 87.01%, 26.90%, and 71.74%, and energy consumption by 57.98%, 34.36%, and 35.37%, respectively, compared with the three benchmark algorithms analyzed in this study. Our experimental results strongly support the effectiveness of FedHealthFog against state-of-the-art alternatives while simultaneously reducing the number of global aggregation rounds. These findings highlight FedHealthFog's potential to transform federated learning in resource-constrained IoT environments for delay-sensitive applications.
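
The abstract does not give the heuristic's cost function; a minimal sketch of a per-round greedy aggregator choice, assuming each candidate fog node advertises an estimated aggregation latency and energy cost (weights and fields are illustrative, not the FedHealthFog design):

```python
# Hypothetical per-round greedy selection of the global aggregator among fog nodes.
from dataclasses import dataclass

@dataclass
class FogNode:
    node_id: str
    est_latency_ms: float   # expected time to collect and aggregate local updates
    est_energy_j: float     # expected energy spent on aggregation and uplink

def pick_global_aggregator(nodes, w_latency=0.5, w_energy=0.5):
    """Greedily pick the fog node with the lowest weighted latency/energy cost."""
    def cost(n):
        return w_latency * n.est_latency_ms + w_energy * n.est_energy_j
    return min(nodes, key=cost)

nodes = [FogNode("fog-1", 120.0, 3.2), FogNode("fog-2", 95.0, 4.1), FogNode("fog-3", 140.0, 2.8)]
print(pick_global_aggregator(nodes).node_id)  # aggregator chosen for this communication round
```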

2.
Diagnostics (Basel) ; 14(5)2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38472941

ABSTRACT

Malignant lymphoma, which affects the lymphatic system, presents diverse challenges in accurate diagnosis due to its varied subtypes: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Lymphoma is a form of cancer that begins in the lymphatic system and affects lymphocytes, a specific type of white blood cell. This research addresses these challenges by proposing ensemble and non-ensemble transfer learning models employing pre-trained weights from VGG16, VGG19, DenseNet201, InceptionV3, and Xception. For the ensemble technique, this paper adopts a stack-based (two-level) ensemble approach, which is well suited to improving accuracy. Testing on a multiclass dataset of CLL, FL, and MCL reveals exceptional diagnostic accuracy, with DenseNet201, InceptionV3, and Xception exceeding 90% accuracy. The proposed ensemble model, leveraging InceptionV3 and Xception, achieves an outstanding 99% accuracy over 300 epochs, surpassing previous prediction methods. This study demonstrates the feasibility and efficiency of the proposed approach, showcasing its potential in real-world medical applications for precise lymphoma diagnosis.
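
A minimal sketch of a two-level ensemble over the two named backbones, assuming the second level is a small dense meta-classifier stacked on concatenated deep features; input size, layer widths, and the frozen-backbone choice are assumptions, not the paper's exact configuration:

```python
# Two frozen pre-trained backbones feed a level-2 meta-classifier for CLL/FL/MCL.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, Xception

def frozen_backbone(cls):
    base = cls(include_top=False, weights="imagenet", pooling="avg",
               input_shape=(224, 224, 3))
    base.trainable = False                      # level-1 learners stay frozen
    return base

inputs = layers.Input(shape=(224, 224, 3))
feat_a = frozen_backbone(InceptionV3)(inputs)
feat_b = frozen_backbone(Xception)(inputs)

merged = layers.Concatenate()([feat_a, feat_b])  # level-2 input: stacked deep features
x = layers.Dense(256, activation="relu")(merged)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(3, activation="softmax")(x)   # CLL / FL / MCL

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```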

3.
Sci Rep ; 14(1): 6589, 2024 03 19.
Article in English | MEDLINE | ID: mdl-38504098

ABSTRACT

Identifying and recognizing food on the basis of its eating sounds is a challenging task, and it plays an important role in avoiding allergenic foods, supporting people restricted to a particular diet, showcasing cultural significance, and more. The aim of this research paper is to design a novel methodology that identifies food items by analyzing their eating sounds using various deep learning models. To achieve this objective, a system is proposed that extracts meaningful features from food-eating sounds with the help of signal-processing techniques and deep learning models, classifying them into their respective food classes. Initially, 1200 labeled audio files covering 20 food items were collected and visualized to find relationships between the sound files of different food items. Later, to extract meaningful features, techniques such as spectrograms, spectral rolloff, spectral bandwidth, and mel-frequency cepstral coefficients are used to clean the audio files as well as to capture the unique characteristics of different food items. In the next phase, deep learning models such as GRU, LSTM, InceptionResNetV2, and a customized CNN are trained to learn spectral and temporal patterns in the audio signals. The models are also hybridized (Bidirectional LSTM + GRU, RNN + Bidirectional LSTM, and RNN + Bidirectional GRU) and their performance analyzed on the same labeled data in order to associate particular sound patterns with the corresponding food class. During evaluation, the highest accuracy is obtained by GRU (99.28%), the highest precision and F1-score by Bidirectional LSTM + GRU (97.7% and 97.3%), and the highest recall by RNN + Bidirectional LSTM (97.45%). The results of this study demonstrate that deep learning models have the potential to precisely identify foods on the basis of their sounds.
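
A small sketch of the feature-extraction and sequence-model idea described above, assuming mono audio clips, 40 MFCCs, a fixed frame count, and a 20-class GRU classifier; file handling and hyperparameters are illustrative, not the paper's settings:

```python
# MFCC features per clip, padded to a fixed length, fed to a stacked GRU classifier.
import librosa
import numpy as np
from tensorflow.keras import layers, Sequential

def mfcc_sequence(path, sr=22050, n_mfcc=40, max_frames=200):
    """Load a clip and return a fixed-length (max_frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)
    if mfcc.shape[0] < max_frames:                              # pad or truncate
        mfcc = np.pad(mfcc, ((0, max_frames - mfcc.shape[0]), (0, 0)))
    return mfcc[:max_frames]

# The (200, 40) output of mfcc_sequence matches the model input below.
model = Sequential([
    layers.Input(shape=(200, 40)),
    layers.GRU(128, return_sequences=True),
    layers.GRU(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(20, activation="softmax"),    # 20 food classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```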


Subjects
Deep Learning, Humans, Recognition (Psychology), Food, Mental Recall, Records
4.
Sci Rep ; 14(1): 5753, 2024 03 08.
Article in English | MEDLINE | ID: mdl-38459096

ABSTRACT

Parasitic organisms pose a major global health threat, mainly in regions that lack advanced medical facilities. Early and accurate detection of parasitic organisms is vital to saving lives. Deep learning models have uplifted the medical sector by providing promising results in diagnosing, detecting, and classifying diseases. This paper explores the role of deep learning techniques in detecting and classifying various parasitic organisms. The research uses a dataset consisting of 34,298 samples of parasites such as Toxoplasma gondii, Trypanosome, Plasmodium, Leishmania, Babesia, and Trichomonad, along with host cells such as red blood cells and white blood cells. These images are initially converted from RGB to grayscale, followed by the computation of morphological features such as perimeter, height, area, and width. Otsu thresholding and watershed techniques are then applied to differentiate foreground from background and to create markers on the images for identifying regions of interest. Deep transfer learning models such as VGG19, InceptionV3, ResNet50V2, ResNet152V2, EfficientNetB3, EfficientNetB0, MobileNetV2, Xception, DenseNet169, and a hybrid model, InceptionResNetV2, are employed. The parameters of these models are fine-tuned using three optimizers: SGD, RMSprop, and Adam. Experimental results reveal that with RMSprop, VGG19, InceptionV3, and EfficientNetB0 achieve the highest accuracy of 99.1% with a loss of 0.09. Similarly, with the SGD optimizer, InceptionV3 performs exceptionally well, achieving the highest accuracy of 99.91% with a loss of 0.98. Finally, with the Adam optimizer, InceptionResNetV2 excels, achieving the highest accuracy of 99.96% with a loss of 0.13, outperforming the other optimizers. These findings indicate that deep learning models coupled with image-processing methods provide a highly accurate and efficient way to detect and classify parasitic organisms.
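
A sketch of the Otsu-plus-watershed marker pipeline mentioned above, following the standard OpenCV recipe; the file name, kernel sizes, and thresholds are illustrative assumptions, not the paper's preprocessing parameters:

```python
# Otsu thresholding + distance transform + watershed markers on a microscopy image.
import cv2
import numpy as np

img = cv2.imread("sample_parasite.png")                 # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu thresholding separates foreground (cells/parasites) from background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphological opening removes small noise before marker generation.
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)

sure_bg = cv2.dilate(opening, kernel, iterations=3)     # certain background
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)

# Connected components become watershed markers; watershed outlines region borders.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)                        # mark regions of interest
```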


Subjects
Babesia, Deep Learning, Parasites, Toxoplasma, Animals, Microscopy
5.
Cancers (Basel) ; 16(4)2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38398091

ABSTRACT

In the evolving landscape of medical imaging, the escalating need for deep-learning methods takes center stage, offering the capability to autonomously acquire abstract data representations crucial for early detection and classification for cancer treatment. The complexities in handling diverse inputs, high-dimensional features, and subtle patterns within imaging data are acknowledged as significant challenges in this technological pursuit. This Special Issue, "Recent Advances in Deep Learning and Medical Imaging for Cancer Treatment", has attracted 19 high-quality articles that cover state-of-the-art applications and technical developments of deep learning, medical imaging, automatic detection and classification, and explainable artificial-intelligence-enabled diagnosis for cancer treatment. In the ever-evolving landscape of cancer treatment, five pivotal themes have emerged as beacons of transformative change. This editorial delves into the realms of innovation that are shaping the future of cancer treatment, focusing on five interconnected themes: use of artificial intelligence in medical imaging, applications of AI in cancer diagnosis and treatment, addressing challenges in medical image analysis, advancements in cancer detection techniques, and innovations in skin cancer classification.

8.
Sci Rep ; 13(1): 20918, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38017082

ABSTRACT

In this article, a low-complexity VLSI architecture based on a radix-4 hyperbolic COordinate Rotation DIgital Computer (CORDIC) is proposed to compute the [Formula: see text] root and [Formula: see text] power of a fixed-point number. The most recent techniques use the radix-2 CORDIC algorithm to compute the root and power, and the high computation latency of radix-2 CORDIC is the primary concern for designers. In the proposed architecture, the [Formula: see text] root and [Formula: see text] power computations are divided into three phases, and each phase is performed by a different class of the proposed modified radix-4 CORDIC algorithms. Although radix-4 CORDIC converges faster with fewer recurrences, it demands more hardware resources and computational steps due to its intricate angle-selection logic and variable scale factor. We employ the modified radix-4 hyperbolic vectoring (R4HV) CORDIC to compute logarithms, radix-4 linear vectoring (R4LV) to perform division, and the modified scaling-free radix-4 hyperbolic rotation (R4HR) CORDIC to compute exponentials. The criterion for selecting the amount of rotation in R4HV CORDIC is complicated and depends on the coordinates [Formula: see text] and [Formula: see text] of the rotating vector. In the proposed modified R4HV CORDIC, we derive a simple selection criterion based on the fact that the inputs to the R4HV CORDIC are related; the proposed criterion depends only on the coordinate [Formula: see text], which reduces the hardware complexity of the R4HV CORDIC. The R4HR CORDIC has a complicated scale factor, and compensating for it requires complex hardware; this complexity is reduced by pre-computing the scale factor for the initial iterations and by employing scaling-free rotations for later iterations. Quantitative hardware analysis suggests better hardware utilization than recent approaches. The proposed architecture is implemented on a Virtex-6 FPGA, and the implementation demonstrates [Formula: see text] lower hardware utilization with better error performance than the radix-2 CORDIC approach.
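
The abstract does not spell out how the three CORDIC classes combine; a standard decomposition of this kind, assuming the "[Formula: see text]" placeholders denote an N-th root and N-th power, maps one operation to each phase (logarithm, division, exponential):

```latex
% Illustrative three-phase decomposition (assumed N-th root case); the power
% case is analogous, with the division replaced by a multiplication by N.
\[
  \sqrt[N]{a} \;=\; \exp\!\Bigl(\tfrac{1}{N}\,\ln a\Bigr),
  \qquad
  \ln a \;=\; 2\,\operatorname{artanh}\!\Bigl(\tfrac{a-1}{a+1}\Bigr),
  \qquad
  e^{q} \;=\; \cosh q + \sinh q .
\]
% Phase 1 (R4HV, hyperbolic vectoring): ln a.
% Phase 2 (R4LV, linear vectoring): q = (ln a)/N.
% Phase 3 (R4HR, hyperbolic rotation): e^q.
```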

9.
Sci Rep ; 13(1): 18475, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37891188

ABSTRACT

Agriculture plays a pivotal role in the economies of developing countries by providing livelihoods, sustenance, and employment opportunities in rural areas. However, crop diseases pose a significant threat to both farmers' incomes and food security, and they can also adversely affect human health by causing various illnesses. To date, only a limited number of studies have been conducted to identify and classify diseased cauliflower plants, and they face challenges such as insufficient disease-surveillance mechanisms, the lack of comprehensive, properly labelled, high-quality datasets, and the considerable computational resources required for thorough analysis. In view of these challenges, the primary objective of this manuscript is to address them and to enhance understanding of the significance of cauliflower disease identification and detection in rural agriculture through advanced deep transfer learning techniques. The work is conducted on four classes (Bacterial spot rot, Black rot, Downy mildew, and No disease) taken from the VegNet dataset. Ten deep transfer learning models, EfficientNetB0, Xception, EfficientNetB1, MobileNetV2, EfficientNetB2, DenseNet201, EfficientNetB3, InceptionResNetV2, EfficientNetB4, and ResNet152V2, are trained and examined on the basis of root mean square error, recall, precision, F1-score, accuracy, and loss. Remarkably, EfficientNetB1 achieved the highest validation accuracy (99.90%), the lowest loss (0.16), and the lowest root mean square error (0.40) during experimentation. This research highlights the critical role of advanced CNN models in automating cauliflower disease detection and classification; such models can lead to robust applications for cauliflower disease management in agriculture, ultimately benefiting both farmers and consumers.
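
For reference, a generic sketch of how the reported evaluation metrics can be computed from a model's class predictions with scikit-learn; the label values and arrays below are illustrative placeholders, not the paper's results:

```python
# Accuracy, macro precision/recall/F1, and RMSE from integer class predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, mean_squared_error)

y_true = np.array([0, 1, 2, 3, 1, 0, 2, 3])   # 4 classes, e.g. spot rot / black rot / downy mildew / healthy
y_pred = np.array([0, 1, 2, 3, 1, 0, 1, 3])

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
print("RMSE     :", np.sqrt(mean_squared_error(y_true, y_pred)))
```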


Subjects
Deep Learning, Drug-Related Side Effects and Adverse Reactions, Humans, Agriculture, Disease Management, Empirical Research
10.
Sci Rep ; 13(1): 9937, 2023 06 19.
Article in English | MEDLINE | ID: mdl-37336964

ABSTRACT

Colorectal cancer is the third most common type of cancer diagnosed annually and the second leading cause of death due to cancer. Early diagnosis of this ailment is vital for preventing the tumour from spreading and for planning treatment to possibly eradicate the disease. However, population-wide screening is hindered by the need for medical professionals to analyse histological slides manually. Thus, an automated computer-aided detection (CAD) framework based on deep learning is proposed in this research that uses histological slide images for predictions. Ensemble learning is a popular strategy for fusing the salient properties of several models to make the final predictions, but such frameworks are computationally costly since they require training multiple base learners. Instead, this study adopts a snapshot ensemble method: rather than the traditional approach of fusing decision scores from the snapshots of a Convolutional Neural Network (CNN) model, deep features are extracted from the penultimate layer of the CNN model. Since the deep features are extracted from the same CNN model under different learning environments, the feature set may contain redundancy. To alleviate this, the features are fed into Particle Swarm Optimization, a popular meta-heuristic, for dimensionality reduction of the feature space and better classification. Upon evaluation on a publicly available colorectal cancer histology dataset using a five-fold cross-validation scheme, the proposed method obtains a best accuracy of 97.60% and an F1-score of 97.61%, outperforming existing state-of-the-art methods on the same dataset. Further, qualitative investigation of class activation maps provides visual explainability to medical practitioners and justifies the use of the CAD framework in screening colorectal histology. Our source code is publicly accessible at: https://github.com/soumitri2001/SnapEnsemFS .
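
A minimal sketch of the snapshot idea described above: train in cyclic (cosine-annealed) rounds, keep one weight snapshot per cycle, and take penultimate-layer deep features from each snapshot rather than fusing decision scores. The backbone, image size, class count, and placeholder data are assumptions for illustration, not the paper's configuration:

```python
# Cyclic training produces several snapshots; deep features come from the
# penultimate layer of every snapshot and are concatenated per image.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_cnn(num_classes=8):
    base = tf.keras.applications.MobileNetV2(include_top=False, weights=None,
                                             pooling="avg", input_shape=(150, 150, 3))
    feats = layers.Dense(256, activation="relu", name="penultimate")(base.output)
    out = layers.Dense(num_classes, activation="softmax")(feats)
    return Model(base.input, out)

model = build_cnn()
x_train = np.random.rand(32, 150, 150, 3).astype("float32")   # placeholder images
y_train = np.random.randint(0, 8, size=32)                     # placeholder labels

n_cycles, epochs_per_cycle, batch = 3, 5, 8
snapshots = []
for _ in range(n_cycles):
    # Each cycle restarts a cosine-annealed learning rate (warm restart); the
    # weights at the end of the cycle are kept as one snapshot.
    steps = max(1, epochs_per_cycle * (len(x_train) // batch))
    lr = tf.keras.optimizers.schedules.CosineDecay(0.01, decay_steps=steps)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
                  loss="sparse_categorical_crossentropy")
    model.fit(x_train, y_train, batch_size=batch, epochs=epochs_per_cycle, verbose=0)
    snapshots.append(model.get_weights())

# Penultimate-layer features per snapshot; these would then be pruned by PSO.
feat_model = Model(model.input, model.get_layer("penultimate").output)
features = []
for weights in snapshots:
    model.set_weights(weights)
    features.append(feat_model.predict(x_train, verbose=0))
deep_features = np.concatenate(features, axis=1)
print(deep_features.shape)   # (32, 3 * 256)
```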


Subjects
Colorectal Neoplasms, Neural Networks (Computer), Humans, Computers, Software, Colorectal Neoplasms/diagnosis
11.
Sci Rep ; 13(1): 5372, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37005398

ABSTRACT

The Industrial Internet of Things (IIoT) is attracting increasing attention for the enormous opportunities it offers in the field of Industry 4.0. However, severe data privacy and security challenges arise when handling automatic data collection and monitoring in industrial IIoT applications. Traditional user-authentication strategies in IIoT rely on single-factor authentication, which adapts poorly as the number of users and user categories grows. To address this issue, this paper implements a privacy-preservation model for IIoT using advances in artificial intelligence techniques. The two major stages of the designed system are the sanitization and restoration of IIoT data. Data sanitization hides sensitive information in IIoT to prevent it from leaking. The designed sanitization procedure performs optimal key generation with a new Grasshopper-Black Hole Optimization (G-BHO) algorithm. A multi-objective function involving the degree of modification, hiding rate, correlation coefficient between the actual and restored data, and information preservation rate is derived and used to generate the optimal key. The simulation results establish the dominance of the proposed model over other state-of-the-art models in terms of various performance metrics. In terms of privacy preservation, the proposed G-BHO algorithm achieves results 1%, 15.2%, 12.6%, and 1% better than JA, GWO, GOA, and BHO, respectively.
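
The abstract names the terms of the multi-objective function without giving its form; below is a hedged numpy sketch of one plausible weighted combination. The weights, the way each term is measured, and the toy data are all assumptions, not the G-BHO formulation:

```python
# Toy multi-objective score for a candidate sanitization key (higher is better).
import numpy as np

def key_fitness(original, sanitized, restored, sensitive_mask,
                weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine modification degree, hiding rate, correlation, and preservation."""
    w_mod, w_hide, w_corr, w_pres = weights
    modification = np.mean(original != sanitized)                 # changed entries (minimise)
    hiding_rate = np.mean(original[sensitive_mask] != sanitized[sensitive_mask])
    correlation = np.corrcoef(original, restored)[0, 1]           # actual vs. restored data
    preservation = np.mean(original == restored)                  # information preserved
    return (-w_mod * modification + w_hide * hiding_rate
            + w_corr * correlation + w_pres * preservation)

rng = np.random.default_rng(0)
original = rng.integers(0, 100, size=64)
sensitive = np.zeros(64, dtype=bool)
sensitive[:16] = True                         # first 16 values treated as sensitive
sanitized = original.copy()
sanitized[sensitive] ^= 21                    # toy "key": XOR the sensitive entries
restored = original.copy()                    # ideal restoration for illustration
print(round(key_fitness(original, sanitized, restored, sensitive), 4))
```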

12.
J Ambient Intell Humaniz Comput ; 14(7): 8459-8486, 2023.
Article in English | MEDLINE | ID: mdl-35039756

ABSTRACT

Artificial intelligence can assist providers across a variety of patient-care and intelligent health systems. Artificial intelligence techniques ranging from machine learning to deep learning are prevalent in healthcare for disease diagnosis, drug discovery, and patient risk identification. Accurately diagnosing diseases with artificial intelligence techniques requires numerous medical data sources, such as ultrasound, magnetic resonance imaging, mammography, genomics, and computed tomography scans. Furthermore, artificial intelligence has enhanced the infirmary experience and sped up preparing patients to continue their rehabilitation at home. This article presents a comprehensive survey of artificial intelligence techniques for diagnosing numerous diseases such as Alzheimer's disease, cancer, diabetes, chronic heart disease, tuberculosis, stroke and cerebrovascular disease, hypertension, and skin and liver disease. The survey covers the medical imaging datasets used, together with their feature-extraction and classification processes for prediction. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines are used to select the articles published up to October 2020 in Web of Science, Scopus, Google Scholar, PubMed, Excerpta Medica Database, and Psychology Information for early prediction of distinct kinds of diseases using artificial-intelligence-based techniques. Based on the study of different articles on disease diagnosis, the results are also compared using various quality parameters such as prediction rate, accuracy, sensitivity, specificity, area under the curve, precision, recall, and F1-score.

13.
Biosensors (Basel) ; 12(12)2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36551120

ABSTRACT

The human body is designed to experience stress and react to it: challenges cause the body to produce physical and mental responses and help it adjust to new situations. However, stress becomes a problem when it continues without a period of relaxation or relief. When a person experiences long-term stress, continued activation of the stress response causes wear and tear on the body. Chronic stress contributes to cancer, cardiovascular disease, depression, and diabetes, and is thus deeply detrimental to our health. Previous research on mental stress has mainly used machine-learning-based approaches. However, most of these methods use raw, unprocessed data, which causes more errors and degrades overall model performance. Moreover, corrupt data values are very common, especially in wearable sensor datasets, which may also lead to poor performance. This paper introduces a deep-learning-based method for mental stress detection that encodes raw time-series data into Gramian Angular Field images, which yields promising accuracy in detecting an individual's stress level. Experiments were conducted on two standard benchmark datasets, WESAD (wearable stress and affect detection) and SWELL. Testing accuracies of 94.8% and 99.39% are achieved for the WESAD and SWELL datasets, respectively. For the WESAD dataset, chest data are used, including sensor modalities such as three-axis acceleration (ACC), electrocardiogram (ECG), body temperature (TEMP), and respiration (RESP).
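
A minimal numpy sketch of the Gramian Angular Field (summation form) encoding mentioned above; the window length and the toy signal are illustrative assumptions rather than the paper's preprocessing:

```python
# Encode a 1-D sensor window as a GASF image: rescale to [-1, 1], map to polar
# angles, then take cos(phi_i + phi_j).
import numpy as np

def gramian_angular_field(series):
    """Return the GASF image of a 1-D time series (values in [-1, 1])."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale so arccos is defined
    phi = np.arccos(np.clip(x, -1, 1))                # polar-coordinate angle
    return np.cos(phi[:, None] + phi[None, :])        # GASF(i, j) = cos(phi_i + phi_j)

signal = np.sin(np.linspace(0, 8 * np.pi, 128))       # placeholder for an ECG/RESP window
image = gramian_angular_field(signal)
print(image.shape)                                    # (128, 128) image fed to a CNN
```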


Subjects
Neural Networks (Computer), Wearable Electronic Devices, Humans, Machine Learning, Electrocardiography, Psychological Stress
14.
Sci Rep ; 12(1): 20804, 2022 12 02.
Article in English | MEDLINE | ID: mdl-36460697

ABSTRACT

Carcinoma is a primary source of morbidity in women globally, with metastatic disease accounting for most deaths. Its early discovery and diagnosis may significantly increase the odds of survival. Breast cancer imaging is critical for early identification, clinical staging, management choices, and treatment planning. In the current study, the FastAI technology is used with the ResNet-32 model to precisely identify ductal carcinoma. ResNet-32 has fewer layers than the majority of its counterparts, with almost identical performance. FastAI speeds up deep learning models via GPU acceleration and a fast callback mechanism, resulting in faster model execution with less code and better precision in classifying the tissue slides. The Residual Network (ResNet) architecture is proven to handle vanishing gradients and to learn features effectively. Integrating these two computationally efficient technologies yields precise results with reasonable computational effort. The proposed model shows considerable efficiency on evaluation metrics such as sensitivity, specificity, accuracy, and F1-score compared with other commonly used deep learning models. These insights show that the proposed approach might assist practitioners in analyzing breast cancer (BC) cases appropriately, potentially preventing future complications and death. Clinical and pathological analysis and predictive accuracy have been improved with digital image processing.
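
A hedged sketch of a comparable fastai fine-tuning workflow is below; torchvision does not ship a ResNet-32 (it is a CIFAR-style variant), so resnet34 stands in, and the folder path, label layout, and epoch count are assumptions rather than the paper's setup:

```python
# fastai transfer-learning sketch: images organised as histology_slides/<class>/<image>.png
from fastai.vision.all import (ImageDataLoaders, Resize, vision_learner,
                               resnet34, accuracy)

dls = ImageDataLoaders.from_folder("histology_slides", valid_pct=0.2,
                                   item_tfms=Resize(224), bs=32)

learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(5)                    # fastai's GPU-accelerated training loop and callbacks
preds, targets = learn.get_preds()   # validation-set predictions for further evaluation
```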


Subjects
Breast Neoplasms, Ductal Breast Carcinoma, Second Primary Neoplasms, Female, Humans, Breast Neoplasms/diagnostic imaging, Disease Progression, Acceleration
15.
Sensors (Basel) ; 22(23)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36502150

ABSTRACT

Wearable healthcare equipment is primarily designed to alert patients to specific health conditions or to act as a useful tool for treatment or follow-up. With the growth of technologies and connectivity, the security of these devices has become a growing concern. The lack of security awareness among novice users and the risk of intermediary attacks aimed at accessing health information severely endanger the use of IoT-enabled healthcare systems. In this paper, a blockchain-based secure data storage system is proposed along with user authentication and health-status prediction. First, this work utilizes a reversed public-private key combined Rivest-Shamir-Adleman (RP2-RSA) algorithm to provide security. Second, feature selection is performed using a correlation-factor-induced salp swarm optimization algorithm (CF-SSOA). Finally, health-status classification is performed using an advanced weight-initialization-adapted SignReLU activation-function-based artificial neural network (ASR-ANN), which classifies the status as normal or abnormal. Abnormal measurements are stored in the corresponding patient's blockchain, where blockchain technology is used to store medical data securely for further analysis. The proposed model achieves an accuracy of 95.893% and is validated by comparison with other baseline techniques. On the security front, the proposed RP2-RSA attains a 96.123% security level.
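
For intuition on the storage side, here is a generic hash-chained block sketch of appending an abnormal reading to a per-patient chain; it is not the paper's RP2-RSA or consensus design, and every field name is an assumption:

```python
# Each block's hash covers the reading plus the previous block's hash, so
# tampering with any stored reading breaks every link that follows it.
import hashlib
import json
import time

def make_block(reading, prev_hash):
    """Create a block whose hash covers the reading and the previous hash."""
    block = {
        "timestamp": time.time(),
        "reading": reading,          # e.g. {"patient": "P-001", "heart_rate": 148}
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block({"genesis": True}, prev_hash="0" * 64)]
abnormal = {"patient": "P-001", "heart_rate": 148, "status": "abnormal"}
chain.append(make_block(abnormal, prev_hash=chain[-1]["hash"]))

print(chain[-1]["hash"][:16], "links to", chain[-1]["prev_hash"][:16])
```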


Subjects
Blockchain, Humans, Neural Networks (Computer), Algorithms, Technology, Delivery of Health Care, Computer Security
17.
Diagnostics (Basel) ; 12(12)2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36553074

ABSTRACT

The development of genomic technology for smart diagnosis and therapies for various diseases has lately been one of the most demanding areas of computer-aided diagnosis and treatment research. Exponential breakthroughs in artificial intelligence and machine intelligence technologies could pave the way for identifying challenges afflicting the healthcare industry. Genomics is paving the way for predicting future illnesses, including cancer, Alzheimer's disease, and diabetes. Machine learning advancements have expedited the pace of biomedical informatics research and inspired new branches of computational biology. Furthermore, knowledge of gene relationships has led to more accurate models that can effectively detect patterns in vast volumes of data, making classification models important in various domains. Recurrent neural network models have a memory that allows them to retain knowledge from previous cycles and to process genetic data. The present work focuses on type 2 diabetes prediction using gene sequences derived from genomic DNA fragments, with automated feature selection and feature extraction procedures for matching gene patterns with training data. The suggested model was tested using tabular data to predict type 2 diabetes based on several parameters. The performance of neural networks incorporating Recurrent Neural Network (RNN) components, Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU) is evaluated in this research. The model's efficiency is assessed using evaluation metrics such as sensitivity, specificity, accuracy, F1-score, and Matthews Correlation Coefficient (MCC). The suggested technique predicted future illness with fair accuracy. Furthermore, our research shows that the suggested model can be used in real-world scenarios and that input risk variables from an end-user Android application can be stored and evaluated on a secure remote server.
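
A minimal sketch of feeding DNA fragments to a recurrent classifier of the kind listed above; the integer base encoding, sequence length, and layer sizes are illustrative assumptions, not the paper's pipeline:

```python
# Integer-encode DNA bases, embed them, and classify with a bidirectional GRU.
import numpy as np
from tensorflow.keras import layers, Sequential

BASES = {"A": 1, "C": 2, "G": 3, "T": 4}   # 0 is reserved for padding

def encode(seq, length=200):
    ids = [BASES.get(b, 0) for b in seq.upper()[:length]]
    return np.array(ids + [0] * (length - len(ids)))

model = Sequential([
    layers.Input(shape=(200,)),
    layers.Embedding(input_dim=5, output_dim=16, mask_zero=True),
    layers.Bidirectional(layers.GRU(64)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability of the type 2 diabetes class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.stack([encode("ATGCGTAC" * 25), encode("GGCATTAC" * 25)])
print(model.predict(x, verbose=0).shape)     # (2, 1)
```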

18.
Cancers (Basel) ; 14(17)2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36077727

ABSTRACT

Cancerous tumor cells divide uncontrollably, which results in tumor growth and harm to the body's immune system. Because of the destructive effects of chemotherapy, optimal medication is needed; possible treatment methods should therefore be controlled to maintain a constant/continuous dose that limits the spread of cancerous tumor cells. Rapid growth of cells is classified into primary and secondary types. The immune system plays an important role in mounting a proper response, which is considered a natural process in fighting tumors. Finding better methods to treat tumors is currently a prime focus of researchers. Mathematical modeling of tumors uses combined immune, vaccine, and chemotherapy treatments to check performance stability. In this research paper, mathematical modeling is applied to cancerous tumor growth, the immune system, and normal cells, all of which are directly affected by chemotherapy. This paper presents novel techniques, including a Bernstein polynomial (BSP) with a genetic algorithm (GA), a sliding mode controller (SMC), and synergetic control (SC), to provide a possible solution to the cancerous tumor cell (CCs) model. Through the GA, a random population is generated to evaluate fitness. SMC is used for a continuous exponential dose of chemotherapy to reduce CCs in about forty-five days. In addition, the error function consists of five cases involving normal cells (NCs), immune cells (ICs), CCs, and chemotherapy, and the drug control process is explained for all cases. In the simulation results, utilizing SC completely eliminates CCs in nearly five days. The proposed approach reduces CCs as early as possible.
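
The abstract does not state the model's equations; a toy scipy sketch of the kind of tumor-drug dynamics such dose controllers act on is given below, assuming logistic tumor growth, a drug-proportional kill term, constant infusion, and first-order drug decay. It is not the paper's model or its SMC/SC control laws:

```python
# Toy two-state tumor/drug system integrated over a 45-day treatment window.
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, state, r=0.3, K=1.0, kill=0.8, decay=0.5, dose=0.4):
    T, D = state                                # tumor burden (normalised), drug concentration
    dT = r * T * (1 - T / K) - kill * D * T     # logistic growth minus drug kill
    dD = dose - decay * D                       # constant infusion, first-order decay
    return [dT, dD]

sol = solve_ivp(dynamics, (0, 45), [0.5, 0.0], dense_output=True)
t = np.linspace(0, 45, 10)
print(np.round(sol.sol(t)[0], 4))               # tumor burden over the treatment window
```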

19.
Sensors (Basel) ; 22(16)2022 Aug 16.
Article in English | MEDLINE | ID: mdl-36015869

ABSTRACT

Wireless sensor networks (WSNs) have recently been viewed as the basic architecture that paved the way for the Internet of Things (IoT). Nevertheless, when WSNs are linked with the IoT, a difficult issue arises due to excessive energy utilization in their nodes and short network lifetime. As a result, energy constraints in sensor nodes, sensor data sharing, and routing protocols are fundamental topics in WSNs. This research presents an enhanced smart-energy-efficient routing protocol (ESEERP) that extends the lifetime of the network and improves its connectivity to address the aforementioned deficiencies. It selects the Cluster Head (CH) using an efficient optimization method derived from several objectives, which helps reduce the number of sleepy sensor nodes and decreases energy utilization. A Sail Fish Optimizer (SFO) is then used to find an appropriate route to the sink node for data transfer after CH selection. The proposed methodology is studied mathematically with respect to energy utilization, bandwidth, packet delivery ratio, and network lifetime, and the results are compared with similar existing approaches such as the Genetic Algorithm (GA), Ant Lion Optimization (ALO), and Particle Swarm Optimization (PSO). The simulation shows that, for 500 nodes, the proposed approach achieves a network lifetime of 3500 rounds, a maximum energy utilization of 0.5 Joules, a data rate of 0.52 Mbps, and a packet delivery ratio (PDR) of 96%.
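
A small numpy sketch of a multi-objective cluster-head score of the kind described; the abstract does not give ESEERP's actual fitness terms or weights, so residual energy and distance to the sink are used here purely as illustrative objectives:

```python
# Score each node by residual energy and proximity to the sink, pick the best as CH.
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(20, 2))        # node coordinates in a 100x100 field
residual_energy = rng.uniform(0.2, 1.0, size=20)     # joules remaining per node
sink = np.array([50.0, 120.0])                       # sink node just outside the field

dist_to_sink = np.linalg.norm(positions - sink, axis=1)
# Favour high residual energy and short distance to the sink (normalised terms).
score = 0.6 * (residual_energy / residual_energy.max()) \
        + 0.4 * (1 - dist_to_sink / dist_to_sink.max())
cluster_head = int(np.argmax(score))
print("selected CH:", cluster_head)
```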


Subjects
Computer Communication Networks, Internet of Things, Algorithms, Animals, Conservation of Energy Resources, Wireless Technology
20.
Sci Rep ; 12(1): 14523, 2022 08 25.
Article in English | MEDLINE | ID: mdl-36008545

ABSTRACT

With the electric power grid experiencing a rapid shift to the smart grid paradigm over a deregulated energy market, Internet of Things (IoT) based solutions are gaining prominence and innovative peer-to-peer (P2P) energy trading at the micro level is being deployed. Such advancements, however, leave traditional security models vulnerable and pave the way for Blockchain, a Distributed Ledger Technology (DLT) with decentralized, open, and transparent characteristics, as a viable alternative. However, due to deregulation in energy trading markets, massive volumes of micro-transactions must be supported, which becomes a performance bottleneck with existing Blockchain solutions such as Hyperledger and Ethereum. In this paper, a lightweight 'Tangle'-based framework, namely IOTA (a third-generation DLT), is employed to design an energy trading market whose Directed Acyclic Graph (DAG) based solution not only alleviates the reward overhead for micro-transactions but also provides scalability, quantum resistance, and high transaction throughput at low confirmation latency. Furthermore, the Masked Authenticated Messaging (MAM) protocol is used over the IOTA P2P energy trading framework, allowing energy producers and consumers to share data while maintaining confidentiality and facilitating data accessibility. A Raspberry Pi 3 board along with a voltage sensor (INA219) is used to set up the light node and to publish and fetch data from the Tangle. The benchmarking results indicate low confirmation latency and high throughput compared with Hyperledger Fabric and Ethereum. Moreover, the transaction rate decreases when the IOTA bundle size increases beyond 10; for bundle sizes 5 and 10, it performs better than any other platform. The speedy transaction confirmation time of IOTA makes it well suited to peer-to-peer energy trading scenarios. This study serves as a guideline for deploying end-to-end transactions with the IOTA Distributed Ledger Technology (DLT) and for improving the performance of Blockchain in the energy sector under various operating conditions.


Subjects
Blockchain, Internet of Things, Computer Security, Confidentiality, Publishing